
    Computing Local Fractal Dimension Using Geographical Weighting Scheme

    Get PDF
    The fractal dimension (D) of a surface can be viewed as a summary or average statistic characterizing the geometric complexity of that surface, and D values are useful for measuring the geometric complexity of various land cover types. Existing fractal methods calculate only a single D value to represent the whole surface. However, the geometric complexity of a surface varies across patches, and a single D value is insufficient to capture these detailed variations. Previous studies have calculated local D values using a moving-window technique. The main purpose of this study is to compute local D values in an alternative way, by incorporating a geographical weighting scheme into the original global fractal methods. Three original fractal methods are selected in this study: the Triangular Prism method, the Differential Box Counting method, and the Fourier Power Spectral Density method. A Gaussian density kernel function is used for local adaptation, and various bandwidths are tested. The first part of this dissertation research explores and compares the global and local D values of these three methods using test images; the D value is computed for every pixel across the image to show the variation in surface complexity. The second part of the dissertation studies two major U.S. cities in different regions: New York City and Houston are compared using D values, both spatially and temporally. The results show that the geographical weighting scheme is suitable for calculating local D values but is very sensitive to small bandwidths. New York City and Houston show similar global D results for both 2000 and 2016, indicating that there was little land cover change during the study period.
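
    To make the geographical weighting idea concrete, the sketch below (Python, illustrative only) computes a local D at one focal pixel by Gaussian-weighting a stack of precomputed per-pixel surface measures (e.g. triangular-prism areas at several grid steps) and fitting a log-log regression. The array names, the "D = 2 - slope" convention, and the bandwidth handling are assumptions for illustration, not the dissertation's actual code.

    import numpy as np

    def gaussian_weights(rows, cols, focal, bandwidth):
        # Gaussian density kernel centred on the focal pixel (assumed weighting form)
        rr, cc = np.mgrid[0:rows, 0:cols]
        d2 = (rr - focal[0]) ** 2 + (cc - focal[1]) ** 2
        return np.exp(-0.5 * d2 / bandwidth ** 2)

    def local_fractal_dimension(measures, scales, focal, bandwidth):
        # measures: array of shape (n_scales, rows, cols) holding a per-pixel surface
        # measure at each scale (e.g. triangular-prism area); scales: the grid steps
        _, rows, cols = measures.shape
        w = gaussian_weights(rows, cols, focal, bandwidth)
        m_bar = (measures * w).sum(axis=(1, 2)) / w.sum()   # geographically weighted totals
        slope = np.polyfit(np.log(scales), np.log(m_bar), 1)[0]
        return 2.0 - slope   # assumed triangular-prism convention: log(area) ~ (2 - D) * log(step)

    Computing D for every pixel then amounts to repeating this at each focal location, with the bandwidth controlling how local the estimate is.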

    Overall study of solar simulation optical system with large irradiated surface using free-form concentrator to improve uniformity

    No full text
    Summary: Solar simulators with large irradiation surfaces often suffer from low irradiation uniformity. A design method for a large-irradiation-surface solar simulator with high irradiation uniformity is therefore proposed. Based on the law of conservation of energy and the edge-ray principle of non-imaging optics, a free-form concentrator is designed and optimized with a simulated annealing algorithm operating on a Bessel-curve profile, improving the uniformity of the beam entering the integrator. The optical integrator and projection system are also designed and optimized to eliminate aberrations, improve light efficiency, and enlarge the irradiated area. The design is verified in LightTools and achieves an effective irradiation size of Φ1200 mm with an irradiance of one solar constant and an irradiation non-uniformity below 2.0%. This study provides accurate and reliable solar irradiation for laboratory calibration and performance testing of spacecraft payloads.
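
    As a rough illustration of the optimization step only, the generic simulated-annealing loop below perturbs the control points of the concentrator profile and accepts worse candidates with a temperature-dependent probability. The cost function, cooling schedule, and step size are placeholders (in the actual workflow the non-uniformity would come from ray tracing, e.g. in LightTools); this is not the authors' implementation.

    import math
    import random

    def anneal_profile(control_points, nonuniformity, t0=1.0, t_min=1e-4,
                       alpha=0.95, steps_per_t=50, step=0.01):
        # nonuniformity: user-supplied cost function returning the irradiance
        # non-uniformity for a given set of profile control points (placeholder)
        current, f_cur = list(control_points), nonuniformity(control_points)
        best, f_best = list(current), f_cur
        t = t0
        while t > t_min:
            for _ in range(steps_per_t):
                cand = [p + random.uniform(-step, step) for p in current]
                f_cand = nonuniformity(cand)
                # always accept improvements; accept worse moves with Boltzmann probability
                if f_cand < f_cur or random.random() < math.exp(-(f_cand - f_cur) / t):
                    current, f_cur = cand, f_cand
                    if f_cur < f_best:
                        best, f_best = list(current), f_cur
            t *= alpha   # geometric cooling
        return best, f_best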

    Energy-Based Adversarial Example Detection for SAR Images

    No full text
    Adversarial examples (AEs) raise increasing concern about the security of deep-learning-based synthetic aperture radar (SAR) target recognition systems. SAR AEs whose perturbation is constrained to the vicinity of the target have recently come into the spotlight because of their prospects for physical realization. However, current adversarial detection methods generally suffer severe performance degradation against SAR AEs with region-constrained perturbation. To address this problem, we treat SAR AEs as low-probability samples that are incompatible with the clean dataset. With the help of energy-based models, we identify an inherent energy gap between SAR AEs and clean samples that is robust to changes in the perturbation region. Motivated by this observation, we propose an energy-based adversarial detector that requires no modification to a pretrained model. To better separate clean samples from AEs, energy regularization is adopted to fine-tune the pretrained model. Experiments demonstrate that the proposed method significantly boosts detection performance against SAR AEs with region-constrained perturbation.
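
    A minimal sketch of the detection idea, assuming the standard free-energy score over classifier logits (PyTorch). The threshold calibration and the direction of the energy gap are assumptions based on the usual treatment of low-probability samples; the energy-regularized fine-tuning described above is not shown.

    import torch

    @torch.no_grad()
    def energy_score(model, x, temperature=1.0):
        # free energy E(x) = -T * logsumexp(f(x)/T) over the classifier logits;
        # lower energy corresponds to higher likelihood under the implicit EBM
        logits = model(x)                                   # shape (batch, num_classes)
        return -temperature * torch.logsumexp(logits / temperature, dim=1)

    def detect_adversarial(model, x, threshold):
        # flag inputs whose energy exceeds a threshold calibrated on clean SAR chips
        # (assumed convention: AEs are low-probability, hence higher energy)
        return energy_score(model, x) > threshold           # True -> suspected AE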

    A concentration-based approach to data classification for choropleth mapping

    No full text
    The choropleth map is a device for displaying socioeconomic data associated with an areal partition of geographic space. Cartographers emphasize the need to standardize raw count data by an area-based total before displaying them in a choropleth map; this standardization converts the raw data from an absolute measure into a relative measure. However, it is recognized that standardization does not let the map reader distinguish between low–low and high–high numerator/denominator combinations. This research addresses some of these issues with concentration-based classification schemes built on Lorenz curves. A test data set of nonwhite birth rate by county in North Carolina is used to demonstrate how this approach differs from traditional mean–variance-based systems such as the Jenks’ optimal classification scheme.
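
    One plausible way to derive concentration-based class breaks from a Lorenz curve is sketched below (Python). Sorting units by rate and cutting the curve into equal shares of the numerator is an illustrative choice under stated assumptions, not necessarily the exact scheme used in the paper.

    import numpy as np

    def lorenz_breaks(numerator, denominator, n_classes=5):
        # sort areal units by rate, accumulate the numerator along the Lorenz curve,
        # and place breaks where the cumulative share crosses k / n_classes
        num = np.asarray(numerator, dtype=float)
        den = np.asarray(denominator, dtype=float)
        rate = num / den
        order = np.argsort(rate)
        cum_share = np.cumsum(num[order]) / num.sum()
        targets = np.arange(1, n_classes) / n_classes
        idx = np.clip(np.searchsorted(cum_share, targets), 0, len(rate) - 1)
        return rate[order][idx]   # class boundaries expressed as rates

    # e.g. breaks = lorenz_breaks(nonwhite_births, total_births) for county-level data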

    Multi-Objective Optimization of CNC Turning Process Parameters Considering Transient-Steady State Energy Consumption

    No full text
    Energy saving and emission reduction are recognized as primary measures for tackling climate change, one of the major challenges facing humanity in the coming decades. Energy modeling and optimization of machining process parameters are effective ways to realize energy savings in the manufacturing industry. To achieve high-quality, low-energy machining on computer numerical control (CNC) lathes, a multi-objective optimization of CNC turning process parameters that considers transient and steady-state energy consumption is proposed. By analyzing the energy consumption characteristics of the machining process and introducing practical constraints, such as machine tool performance and tool life, a multi-objective optimization model is established with the turning process parameters as optimization variables and high quality and low energy consumption as objectives. The model is solved with the non-dominated sorting genetic algorithm II (NSGA-II) to obtain the Pareto-optimal solution set. Finally, the machining of shaft parts is studied on a CK6153i CNC lathe. The results show that energy consumption is reduced by 38.3% and workpiece surface roughness by 47.0%, verifying the effectiveness of the optimization method.
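
    The sketch below shows only the Pareto-dominance filter that underlies NSGA-II's non-dominated sorting, applied to hypothetical turning parameters and placeholder surrogate objectives; the paper's energy and roughness models, parameter ranges, and constraints are not reproduced.

    import numpy as np

    def pareto_front(objectives):
        # indices of non-dominated rows, all objectives to be minimised
        obj = np.asarray(objectives, dtype=float)
        keep = np.ones(len(obj), dtype=bool)
        for i in range(len(obj)):
            # i is dominated if some row is <= everywhere and strictly < somewhere
            dominated = np.all(obj <= obj[i], axis=1) & np.any(obj < obj[i], axis=1)
            if dominated.any():
                keep[i] = False
        return np.where(keep)[0]

    # hypothetical candidates: (cutting speed m/min, feed rate mm/rev, depth of cut mm)
    rng = np.random.default_rng(0)
    params = rng.uniform([60, 0.05, 0.5], [180, 0.30, 2.0], size=(200, 3))
    # placeholder surrogates standing in for the paper's energy and roughness models
    energy = 1e4 / params.prod(axis=1) + 0.5 * params[:, 0]
    roughness = 100 * params[:, 1] ** 2 / params[:, 2]
    front = pareto_front(np.column_stack([energy, roughness]))
    print(len(front), "non-dominated parameter sets out of", len(params))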

    The Effect of Solution-Focused Group Counseling Intervention on College Students' Internet Addiction: A Pilot Study

    Get PDF
    This pilot study aimed to explore the effect of a solution-focused group counseling intervention on Internet addiction among college students. Eighteen college students participated; nine were assigned to the experimental group and the remaining nine to a control group. The experimental group received group counseling for five weeks, while the control group received no intervention. The revised Chinese Internet Addiction Scale (CIAS-R) was used to measure excessive use in both groups before and after the intervention. The experimental group also completed a follow-up test and self-reported Internet addiction scores six months after the end of group counseling. Results showed that after the five-week solution-focused group counseling, scores on four dimensions of the CIAS-R decreased in the experimental group, and the reduction in the total CIAS-R score followed a similar trend across all subjects in this group. The treatment effect was larger than the placebo reduction in the control group on two dimensions: compulsive use and withdrawal (Sym-C & Sym-W) and tolerance (Sym-T) symptoms. Qualitative findings corroborated the quantitative data, showing that the experimental group reduced its Internet addiction symptoms. Overall, the findings suggest that solution-focused group counseling had positive intervention effects on Internet addiction.

    Practical Lossless Federated Singular Vector Decomposition over Billion-Scale Data

    Full text link
    With the enactment of privacy-preserving regulations such as the GDPR, federated SVD has been proposed to enable SVD-based applications across different data sources without revealing the original data. However, many SVD-based applications are not well supported by existing federated SVD solutions. The crux is that these solutions, adopting either differential privacy (DP) or homomorphic encryption (HE), suffer from accuracy loss caused by irremovable noise or degraded efficiency due to inflated data. In this paper, we propose FedSVD, a practical lossless federated SVD method over billion-scale data, which simultaneously achieves lossless accuracy and high efficiency. At the heart of FedSVD is a lossless matrix masking scheme designed specifically for SVD: 1) while adopting masks to protect private data, FedSVD completely removes them from the final SVD results to achieve lossless accuracy; and 2) because the masks do not inflate the data, FedSVD avoids extra computation and communication overhead during factorization and maintains high efficiency. Experiments with real-world datasets show that FedSVD is over 10,000 times faster than the HE-based method and has an error 10 orders of magnitude smaller than the DP-based solution on SVD tasks. We further build and evaluate FedSVD on three real-world applications: principal component analysis (PCA), linear regression (LR), and latent semantic analysis (LSA), to show its superior performance in practice. On federated LR tasks, compared with two state-of-the-art solutions, FATE and SecureML, FedSVD-LR is 100 times faster than SecureML and 10 times faster than FATE.
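
    As a single-machine illustration of why masking can be lossless for SVD, the sketch below assumes the masks are random orthogonal matrices applied on both sides of the data. This reproduces the lossless-recovery property but is not the distributed FedSVD protocol itself: mask generation, partitioning across parties, and the billion-scale optimizations are all omitted.

    import numpy as np

    def random_orthogonal(n, rng):
        # random orthogonal matrix from the QR decomposition of a Gaussian matrix
        q, r = np.linalg.qr(rng.standard_normal((n, n)))
        return q * np.sign(np.diag(r))

    def masked_svd_demo(X, seed=0):
        rng = np.random.default_rng(seed)
        m, n = X.shape
        A = random_orthogonal(m, rng)        # left mask, hidden from the aggregator
        B = random_orthogonal(n, rng)        # right mask
        U_m, S, Vt_m = np.linalg.svd(A @ X @ B, full_matrices=False)
        # orthogonal masks leave the singular values untouched; unmask the vectors
        return A.T @ U_m, S, B @ Vt_m.T

    X = np.random.default_rng(1).standard_normal((50, 20))
    U, S, V = masked_svd_demo(X)
    assert np.allclose(U @ np.diag(S) @ V.T, X)    # lossless reconstruction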